
Use GPT-3 in Python With the OpenAI API and W&B Tables

This is a guide to get you going with GPT-3 in Python. We'll start with some background on GPT-3 before diving into the code, and we'll use W&B Tables to keep track of our model predictions.
GPT-3 is a powerful model developed by OpenAI that can be applied to many natural language processing tasks.
OpenAI's Playground is a great place to get started with, well, playing around with model predictions and seeing how they work.
For example, here's GPT-3 talking about itself:


However, once you get a bit more serious with the model, you may want to start deploying it in Python as a part of some project or application.
In this article, I'd like to show you the easiest way to get started with using GPT-3 in Python as well as how you can keep track of the different model predictions you get and interactively explore them with W&B Tables.
If you aren't familiar with our Tables feature, it's a tool in the W&B ecosystem that lets you interactively explore tabular data in your browser. We'll see how that helps us when working with GPT-3.
Here's what we'll be covering:

Table of Contents

What Can You Use GPT-3 For?
The Code
Visualizing GPT-3 Model Predictions
Want To Get More Out of GPT-3?
Conclusion

Let's dive in!

What Can You Use GPT-3 For?

It's hard to pin down the exact number of tasks that GPT-3 is useful for. It can be used for question answering, text summarization, sequence classification, translation, and text generation, to name just a few; however, its applications are far broader than this.
If you need some inspiration, I recently wrote another piece titled 5 Creative Tips for GPT-3 Prompt Engineering, where I share some cool ways to use GPT-3, including summary, devil's advocate, and question answering.
OpenAI itself provides a great number of examples of all sorts of different GPT-3 tasks here. What's nice for us is that all of those come with Python API code snippets.
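To make this concrete, here's a minimal sketch of prompt templates for a few of the tasks above. The exact wording of each template is just an illustration, not an official OpenAI format; with GPT-3, the task is specified entirely through the prompt text.

# Illustrative prompt templates; the wording is an example, not an official format
prompt_templates = {
    "summarization": "Summarize this for a second-grade student:\n\n{text}",
    "translation": "Translate this into French:\n\n{text}",
    "classification": "Classify the sentiment of this tweet as positive, neutral, or negative:\n\n{text}\nSentiment:",
    "question_answering": "Q: {question}\nA:",
}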

So, let's dive into using GPT-3 in Python!

The Code

If we want to use the OpenAI API in Python, we have to start off by adding our API key as an environment variable. You get an API key upon registering on OpenAI's website, and you can find it here.
%env OPENAI_API_KEY=YOUR_OPENAI_API_KEY
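Note that the %env magic only works inside a Jupyter or Colab notebook. If you're running a plain Python script, one quick alternative is to set the variable from Python itself before the client reads it (hard-coding a key like this is fine for a local test, but don't commit it anywhere):

import os

# Set the key in the process environment; os.getenv() below will pick it up
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"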
You shouldn't normally show other people your API key; I'll create a new one after finishing this article.
💡
Next up, we'll install two dependencies: OpenAI's Python client and wandb, a Weights & Biases Python client, to explore GPT-3's predictions.
!pip install --upgrade openai wandb
Then, we'll import the modules and pass our API key to OpenAI's Python client.
import os
import openai
import wandb

openai.api_key = os.getenv("OPENAI_API_KEY")
Now that we've done that, we can initialize a new Weights & Biases project named GPT-3 in Python, as well as a new W&B Table with two columns: prompt and completion.
run = wandb.init(project='GPT-3 in Python')
prediction_table = wandb.Table(columns=["prompt", "completion"])
Here's the bread and butter of this tutorial: in the next chunk of code, we're actually sending our prompt via the API to get a GPT-3 prediction! (OpenAI runs all of the massive GPT-3 model's computations on its end, so it doesn't matter what hardware you have; no worries there.)
gpt_prompt = "Correct this to standard English:\n\nShe no went to the market."

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=gpt_prompt,
    temperature=0.5,
    max_tokens=256,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
)

print(response['choices'][0]['text'])

prediction_table.add_data(gpt_prompt, response['choices'][0]['text'])
Note that the GPT-3 model has a hard limit of 2,049 "tokens" shared between the request and the response. A "token" corresponds to roughly 4 characters of text, so that works out to about 8,000 characters in total. This means GPT-3 can't process prompts or generate responses of encyclopedic length, but it can generate articles of roughly the size of the one you're reading right now.
💡
So, in the engine="text-davinci-002" line, we are specifying which variation of GPT-3 we want to use. The most capable one currently is text-davinci-002, but there are also smaller (and cheaper) ones available; you can learn more about the different GPT-3 engines in the OpenAI docs here.
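For instance, to run the same prompt on a smaller engine, you only need to change that one argument. (text-curie-001 below is one of the smaller GPT-3 engines available at the time of writing; check the docs for the current list.)

# Same call as before, but with a smaller, cheaper engine
response = openai.Completion.create(
    engine="text-curie-001",
    prompt=gpt_prompt,
    temperature=0.5,
    max_tokens=256,
)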
Then, we have some hyperparameters that we need to specify.
  • temperature controls randomness: lowering it results in less random, more deterministic completions.
  • max_tokens is the maximum number of tokens to generate in the completion.
You can use the API Reference docs to learn more about the other hyperparameters.
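To get a feel for temperature in particular, you can run the same prompt at several settings and compare the completions side by side. Here's a minimal sketch; the temperature values are arbitrary choices:

# Run the same prompt at a few temperatures to see how randomness changes
for temp in [0.0, 0.5, 1.0]:
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=gpt_prompt,
        temperature=temp,
        max_tokens=256,
    )
    print(f"temperature={temp}: {response['choices'][0]['text'].strip()}")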

Visualizing GPT-3 Model Predictions

After we perform inference on our prompt, we extract the text from OpenAI's API response and add it to our W&B Table with this line of code:
prediction_table.add_data(gpt_prompt, response['choices'][0]['text'])
We can repeat this process however many times we want (maybe even run it in a loop over an array of prompts, as sketched below) and keep adding rows to the table that way.
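Here's a minimal sketch of that looped version; the prompts in the list are just illustrative examples:

# Illustrative prompts; swap in your own
gpt_prompts = [
    "Correct this to standard English:\n\nShe no went to the market.",
    "Correct this to standard English:\n\nHim and me goes to school.",
]

for prompt in gpt_prompts:
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        temperature=0.5,
        max_tokens=256,
    )
    # One row per prompt/completion pair
    prediction_table.add_data(prompt, response['choices'][0]['text'])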
And after we finish our experiments, we can call wandb.log to log our W&B Table, letting us interactively visualize it in the browser:
wandb.log({'predictions': prediction_table})
wandb.finish()
This way, you'll be sure never to lose track of your GPT-3 prompt-and-completion experiments! 😉
Here's an example of what I got after playing with GPT-3 for a little bit, feeding it various prompts by hand in Python.



Want To Get More Out of GPT-3?

If you're working on an industry-specific custom application, say, you want GPT-3 to help answer people's questions about your product, you may want to look into fine-tuning GPT-3!
Weights & Biases and OpenAI have collaborated, and now you can track your fine-tune jobs in W&B with just one line of the OpenAI Python CLI.
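At the time of writing, that one-liner is the wandb sync subcommand that ships with the openai Python package (if the syntax has changed, the article linked below has the details):

# Sync your OpenAI fine-tune jobs to your W&B account
!openai wandb sync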

If you want to learn more about that, feel free to read this in-depth article I wrote about fine-tuning GPT-3 to generate new Doctor Who episode synopses, or you may enjoy this video I made about fine-tuning GPT-3 too!




Conclusion

Thank you for reading! I hope this article helps you use GPT-3 in Python, and that you enjoyed it. Happy generating! 😊